Vision Transformers AI News List | Blockchain.News

List of AI News about Vision Transformers

10:35
Next-Token Prediction in Vision AI: New Training Method Drives 83.8% ImageNet Accuracy and Strong Transfer Learning

According to @SciTechera, a new AI training approach applies next-token prediction, the objective commonly used in language models, to Vision AI by treating visual embeddings as sequential tokens. Applied to Vision Transformers (ViTs), the method eliminates the need for pixel reconstruction and complex contrastive losses and trains on unlabeled data. Results show that a ViT-Base model achieves 83.8% top-1 accuracy on ImageNet-1K after fine-tuning, rivalling more complex self-supervised techniques (source: SciTechera, https://x.com/SciTechera/status/2003038741334741425). The study also demonstrates strong transfer learning on semantic segmentation benchmarks such as ADE20K, indicating that the model captures meaningful visual structure rather than merely memorizing patterns. This scalable approach opens new business opportunities for cost-effective, flexible AI vision systems in industries such as healthcare, manufacturing, and autonomous vehicles.
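To make the idea concrete, below is a minimal PyTorch sketch of next-token prediction over patch embeddings: an image is split into raster-ordered patch tokens, a causally masked ViT-style encoder predicts each next patch embedding from the preceding ones, and a simple regression loss drives training on unlabeled images. The class names (PatchTokenizer, CausalViT), the MSE objective, the stop-gradient targets, and all hyperparameters are illustrative assumptions for this sketch, not details confirmed by the source post.

```python
import torch
import torch.nn as nn


class PatchTokenizer(nn.Module):
    """Splits an image into non-overlapping patches and embeds each patch as a token."""

    def __init__(self, img_size=224, patch_size=16, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))

    def forward(self, images):                    # (B, 3, H, W)
        x = self.proj(images)                     # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)          # (B, N, dim), raster-ordered tokens
        return x + self.pos


class CausalViT(nn.Module):
    """ViT-style encoder with a causal mask so each token attends only to earlier patches."""

    def __init__(self, dim=768, depth=12, heads=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, dim)           # regresses the next patch embedding

    def forward(self, tokens):                    # (B, N, dim)
        n = tokens.size(1)
        causal = torch.triu(                      # True above the diagonal = masked out
            torch.ones(n, n, dtype=torch.bool, device=tokens.device), diagonal=1)
        h = self.blocks(tokens, mask=causal)
        return self.head(h)


def next_token_loss(tokenizer, model, images):
    """Predict patch t+1 from patches 1..t; needs no labels, pixels, or contrastive pairs."""
    tokens = tokenizer(images)                    # (B, N, dim)
    preds = model(tokens[:, :-1])                 # predictions for positions 2..N
    targets = tokens[:, 1:].detach()              # stop-gradient next-patch targets
    return nn.functional.mse_loss(preds, targets)


# Usage on a random unlabeled batch (shallow depth just to keep the demo cheap):
tokenizer, model = PatchTokenizer(), CausalViT(depth=2)
loss = next_token_loss(tokenizer, model, torch.randn(2, 3, 224, 224))
loss.backward()
```

After pretraining with an objective along these lines, the encoder would typically be fine-tuned with a classification head for ImageNet-1K or a segmentation decoder for benchmarks such as ADE20K, matching the evaluation described above.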
